Scientific machine learning (SciML) is a field of growing interest across several application areas. In the context of optimization, SciML-based tools have enabled the development of more efficient optimization methods. However, optimization implemented with SciML tools must be evaluated and executed with care. This work proposes a robustness test that guarantees the robustness of SciML-based multi-objective optimization by showing that its results respect the universal approximation theorem. The test is applied within the framework of a novel methodology, which is evaluated on a series of benchmarks to illustrate its consistency. Moreover, the results of the proposed methodology are compared with the feasible regions of the underlying optimization problems, whose computation requires a considerably higher effort. This work therefore provides a robustness test for the guaranteed application of SciML tools in multi-objective optimization at a lower computational cost than the existing alternative.
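As a loose illustration of such a test (a minimal sketch under assumed names, functions, and tolerances, not the paper's actual procedure), a surrogate-proposed optimum can be accepted only if the surrogate's prediction stays within a tolerance of the true objective at that point:

```python
# Hypothetical surrogate-robustness check; everything here (objective,
# surrogate, tolerance) is an illustrative stand-in, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def true_objective(x):
    # Cheap benchmark standing in for the expensive exact model.
    return np.sum(x**2, axis=-1)

def surrogate(x):
    # Pretend this was trained elsewhere; a perturbed copy for illustration.
    return true_objective(x) + 0.01 * np.sin(10 * x).sum(axis=-1)

# Optimum proposed by searching the surrogate only.
candidates = rng.uniform(-1, 1, size=(5000, 2))
best = candidates[np.argmin(surrogate(candidates))]

# Robustness check: accept the result only if the surrogate agrees with the
# true objective at the proposed optimum, within a preset tolerance.
tol = 1e-2
gap = abs(surrogate(best[None]) - true_objective(best[None]))[0]
print("accepted" if gap <= tol else "rejected", f"(gap={gap:.4f})")
```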
Flavor is the focal point of the flavor industry, which follows social trends and behaviors. The research and development of new flavoring agents and molecules is essential in this field. At the same time, the development of natural flavors plays a critical role in modern society. In light of this, the present work proposes a novel framework based on scientific machine learning to tackle new problems in flavor engineering and industry. This work thus brings an innovative methodology for designing new natural flavor molecules. The molecules are evaluated with regard to their synthetic accessibility, number of atoms, and similarity to natural or pseudo-natural products.
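A minimal sketch of this kind of screening using RDKit; the SMILES strings and thresholds below are illustrative assumptions, and the synthetic-accessibility score (e.g., RDKit's contrib sascorer) is left out for brevity:

```python
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

vanillin = Chem.MolFromSmiles("COc1cc(C=O)ccc1O")    # natural reference flavor
candidate = Chem.MolFromSmiles("CCOc1cc(C=O)ccc1O")  # hypothetical candidate

def morgan_fp(mol):
    # Radius-2 Morgan fingerprint, a common choice for similarity screening.
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

similarity = DataStructs.TanimotoSimilarity(morgan_fp(candidate), morgan_fp(vanillin))
n_atoms = candidate.GetNumHeavyAtoms()

# Illustrative acceptance rule on atom count and natural-product similarity.
print(f"atoms={n_atoms}, similarity={similarity:.2f}",
      "-> keep" if n_atoms <= 30 and similarity >= 0.5 else "-> discard")
```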
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.
The intersection of ground reaction forces in a small, point-like area above the center of mass has been observed in computer simulation models and human walking experiments. This intersection point is often called a virtual pivot point (VPP). With the VPP observed so ubiquitously, it is commonly assumed to provide postural stability for bipedal walking. In this study, we challenge this assumption by questioning whether walking without a VPP is possible. Deriving gaits with a neuromuscular reflex model through multi-stage optimization, we found stable walking patterns that show no signs of the VPP-typical intersection of ground reaction forces. We therefore conclude that a VPP is not necessary for upright, stable walking. The non-VPP gaits we found are stable and successfully reject step-down perturbations, which indicates that a VPP is not primarily responsible for locomotion robustness or postural stability. However, a collision-based analysis indicates that non-VPP gaits increase the potential for collisions between the vectors of the center-of-mass velocity and the ground reaction forces during walking, suggesting an increased mechanical cost of transport. Although our computer simulation results have yet to be confirmed through experimental studies, they already strongly challenge the existing explanation of the VPP's function and provide an alternative explanation.
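For context, a VPP is typically identified as the least-squares intersection of the ground-reaction-force lines expressed in a body-fixed frame; a small numerical sketch with fabricated data:

```python
# Least-squares "intersection" of force lines: the point minimizing the sum
# of squared distances to each line. GRF samples below are made up.
import numpy as np

def vpp(cops, forces):
    """Point closest (in least squares) to the 2D lines through `cops`
    with directions `forces` (one line per row)."""
    d = forces / np.linalg.norm(forces, axis=1, keepdims=True)
    # Projector onto each line's normal space: I - d d^T.
    P = np.eye(2)[None] - d[:, :, None] * d[:, None, :]
    A = P.sum(axis=0)
    b = (P @ cops[:, :, None]).sum(axis=0).ravel()
    return np.linalg.solve(A, b)

# Fabricated (centre of pressure, force vector) samples over one step.
cops = np.array([[0.00, 0.0], [0.05, 0.0], [0.10, 0.0]])
forces = np.array([[0.2, 1.0], [0.0, 1.0], [-0.2, 1.0]])
print(vpp(cops, forces))  # -> [0.05, 0.25], where the lines meet
```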
We present a machine-learning framework to accurately characterize morphologies of Active Galactic Nucleus (AGN) host galaxies within $z<1$. We first use PSFGAN to decouple host galaxy light from the central point source, then we invoke the Galaxy Morphology Network (GaMorNet) to estimate whether the host galaxy is disk-dominated, bulge-dominated, or indeterminate. Using optical images from five bands of the HSC Wide Survey, we build models independently in three redshift bins: low $(0<z<0.25)$, medium $(0.25<z<0.5)$, and high $(0.5<z<1.0)$. By first training on a large number of simulated galaxies, then fine-tuning using far fewer classified real galaxies, our framework predicts the actual morphology for $\sim 60\%-70\%$ of host galaxies from test sets, with a classification precision of $\sim 80\%-95\%$, depending on redshift bin. Specifically, our models achieve disk precision of $96\%/82\%/79\%$ and bulge precision of $90\%/90\%/80\%$ (for the 3 redshift bins), at thresholds corresponding to indeterminate fractions of $30\%/43\%/42\%$. The classification precision of our models depends noticeably on host galaxy radius and magnitude; no strong dependence on contrast ratio is observed. On real AGN hosts, our classifications agree well with traditional 2D fitting with GALFIT. The PSFGAN+GaMorNet framework does not depend on the choice of fitting functions or galaxy-related input parameters, runs orders of magnitude faster than GALFIT, and is easily generalizable via transfer learning, making it an ideal tool for studying AGN host galaxy morphology in forthcoming large imaging surveys.
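A hedged sketch of the two-stage transfer-learning recipe (pretrain on many simulated galaxies, then fine-tune only the final layers on far fewer real hosts); the architecture and sizes are assumptions, not GaMorNet's actual configuration:

```python
import torch
import torch.nn as nn

classes = 3  # disk-dominated, bulge-dominated, indeterminate

net = nn.Sequential(
    nn.Conv2d(5, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 5 HSC bands
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, classes),
)

# Stage 1: train `net` on simulated galaxies (training loop omitted).

# Stage 2: freeze the convolutional trunk, fine-tune only the classifier
# head on the much smaller set of classified real galaxies.
for p in net.parameters():
    p.requires_grad = False
head = net[-1]
for p in head.parameters():
    p.requires_grad = True

opt = torch.optim.Adam(head.parameters(), lr=1e-4)
x = torch.randn(8, 5, 64, 64)          # stand-in batch of real host cutouts
y = torch.randint(0, classes, (8,))    # stand-in morphology labels
loss = nn.functional.cross_entropy(net(x), y)
loss.backward()
opt.step()
```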
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
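As an aside, the patch-based strategy most respondents used for oversized samples can be sketched as random fixed-size crops; the sizes below are illustrative:

```python
# Random patch extraction for images too large to process at once.
import numpy as np

rng = np.random.default_rng(0)

def random_patches(image, patch=256, n=4):
    """Sample `n` random patch-size crops from an (H, W, C) image."""
    h, w = image.shape[:2]
    ys = rng.integers(0, h - patch + 1, size=n)
    xs = rng.integers(0, w - patch + 1, size=n)
    return np.stack([image[y:y + patch, x:x + patch] for y, x in zip(ys, xs)])

image = rng.random((2048, 2048, 3))   # sample too large to feed at once
batch = random_patches(image)         # (4, 256, 256, 3) training batch
print(batch.shape)
```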
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants: what we call "shared intelligence". This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world, also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing, leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences, and motivate the development of a shared hyper-spatial modeling language and transaction protocol as a first, and key, step towards such an ecology.
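For reference, the standard variational bound behind self-evidencing (written here in its generic form): minimizing the free energy $F$ both fits the approximate posterior $q(s)$ and maximizes a lower bound on the log model evidence $\ln p(o)$:

$$
F[q] \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big]
\;=\; \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\geq\, 0} \;-\; \ln p(o)
\;\;\geq\;\; -\ln p(o).
$$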
Deep learning semantic segmentation algorithms have provided improved frameworks for the automated production of Land-Use and Land-Cover (LULC) maps, which significantly increases the frequency of map generation as well as the consistency of production quality. In this research, a total of 28 different model variations were examined to improve the accuracy of LULC maps. The experiments were carried out using Landsat 5/7 or Landsat 8 satellite images with North American Land Change Monitoring System (NALCMS) labels. The performance of various CNNs and extension combinations was assessed, where VGGNet with an output stride of 4 and a modified U-Net architecture provided the best results. An additional expanded analysis of the generated LULC maps is also provided. Using a deep neural network, this work achieved 92.4% accuracy for 13 LULC classes within southern Manitoba, representing a 15.8% improvement over published results for the NALCMS. For the large regions of interest, the higher radiometric resolution of Landsat 8 data resulted in better overall accuracies (88.04%) compared to Landsat 5/7 (80.66%) for 16 LULC classes. This represents an 11.44% and 4.06% increase in overall accuracy compared to previously published NALCMS results, while also covering a larger land area and incorporating more LULC classes than other published LULC map automation methods.
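The overall accuracy quoted above is, in the usual LULC convention (assumed here), the fraction of pixels whose predicted class matches the reference label; a toy computation on fabricated 16-class maps:

```python
# Pixel-wise overall accuracy for multi-class segmentation, on made-up data.
import numpy as np

rng = np.random.default_rng(0)
n_classes = 16

labels = rng.integers(0, n_classes, size=(512, 512))
preds = labels.copy()
flip = rng.random(labels.shape) < 0.12          # corrupt ~12% of pixels
preds[flip] = rng.integers(0, n_classes, size=flip.sum())

overall_accuracy = (preds == labels).mean()
print(f"overall accuracy: {overall_accuracy:.2%}")  # ~88%, cf. the Landsat 8 figure
```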
Deep learning has proven effective in many neuroimaging applications. However, in many scenarios, the number of imaging sequences capturing information related to small vessel disease is insufficient to support data-driven techniques. In addition, cohort-based studies may not always include the optimal or essential imaging sequences for accurate lesion detection. It is therefore necessary to determine which imaging sequences are essential for accurate detection. In this study, we aim to find the optimal combination of magnetic resonance imaging (MRI) sequences for deep learning-based detection of enlarged perivascular spaces (EPVS). To this end, we implemented an effective lightweight U-Net adapted for EPVS detection and comprehensively investigated different combinations of information from susceptibility-weighted imaging (SWI), fluid-attenuated inversion recovery (FLAIR), T1-weighted (T1w), and T2-weighted (T2w) MRI sequences. We conclude that the T2w sequence is the most important for accurate EPVS detection, and that incorporating SWI, FLAIR, and T1w MRI into the deep neural network yields only a negligible gain in accuracy.
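The ablation over sequence combinations can be sketched as training one model per subset of co-registered sequences stacked as input channels; the shapes and the training call below are placeholders:

```python
# Enumerate every subset of MRI sequences and build the multi-channel input
# that a per-combination model would be trained on.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
sequences = {name: rng.random((128, 128, 128))   # stand-in co-registered volumes
             for name in ("SWI", "FLAIR", "T1w", "T2w")}

def make_input(names):
    # Stack the selected sequences along a channel axis: (C, D, H, W).
    return np.stack([sequences[n] for n in names])

for k in range(1, len(sequences) + 1):
    for combo in combinations(sequences, k):
        x = make_input(combo)
        # train_unet(x, labels)  # placeholder for the per-combination run
        print(combo, x.shape)
```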
Many applications, such as hiring and university admissions, involve the evaluation and selection of applicants. These tasks are fundamentally difficult and require combining evidence from multiple different aspects, which we term "attributes". In these applications, the number of applicants is often large, and a common practice is to assign the task to multiple evaluators in a distributed manner. Specifically, in the frequently used holistic allocation, each evaluator is assigned a subset of the applicants and is asked to assess all relevant information for their assigned applicants. However, such an evaluation process is subject to issues such as miscalibration (evaluators see only a small fraction of the applicants and may not have a good sense of relative quality) and discrimination (evaluators are influenced by information about the applicants that is irrelevant to the task). We identify that such attribute-based evaluation admits alternative allocation schemes. Specifically, we consider assigning each evaluator more applicants but fewer attributes per applicant, which we call segmented allocation. We compare segmented allocation with holistic allocation along several dimensions via theoretical and experimental methods. We establish various tradeoffs between the two approaches and identify conditions under which one approach evaluates more accurately than the other.
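A toy simulation (under an assumed additive-miscalibration model, not the paper's) illustrates one side of the tradeoff: under segmented allocation a per-evaluator offset shifts all applicants equally and cancels in the ranking, whereas under holistic allocation it distorts comparisons between evaluators' blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
n_applicants, n_attrs = 120, 4           # one evaluator per attribute here
truth = rng.normal(size=(n_applicants, n_attrs))

def holistic():
    # Each of 4 evaluators takes a block of applicants; their personal
    # offset shifts every score they give (miscalibration).
    scores = truth.copy()
    for block in np.array_split(np.arange(n_applicants), n_attrs):
        scores[block] += rng.normal(scale=0.5)
    return scores.sum(axis=1)

def segmented():
    # Each evaluator scores a single attribute for everyone, so their
    # offset adds a constant to all totals and cancels in the ranking.
    return (truth + rng.normal(scale=0.5, size=n_attrs)).sum(axis=1)

def rank(v):
    r = np.empty_like(v)
    r[v.argsort()] = np.arange(len(v))
    return r

ideal = rank(truth.sum(axis=1))
for name, scheme in (("holistic", holistic), ("segmented", segmented)):
    rho = np.corrcoef(rank(scheme()), ideal)[0, 1]   # Spearman rank correlation
    print(name, f"rank correlation: {rho:.3f}")
```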